Early sensory systems in the brain rapidly adapt to fluctuating input statistics, which requires recurrent communication between neurons. Mechanistically, this recurrent communication is often indirect and mediated by local interneurons. In this work, we explore the computational benefits of mediating recurrent communication via interneurons compared with direct recurrent connections. To this end, we consider two mathematically tractable recurrent linear neural networks that statistically whiten their inputs -- one with direct recurrent connections and the other with interneurons that mediate the recurrent communication. By analyzing the corresponding continuous synaptic dynamics and numerically simulating the networks, we show that the network with interneurons is more robust to initialization than the network with direct recurrent connections, in the sense that the convergence time of the synaptic dynamics in the network with interneurons (resp. direct recurrent connections) scales logarithmically (resp. linearly) with the spectrum of their initialization. Our results suggest that interneurons are computationally useful for rapid adaptation to changing input statistics. Interestingly, the network with interneurons is an overparameterized solution of the whitening objective for the network with direct recurrent connections, so our results can be viewed as a recurrent-neural-network analogue of the implicit acceleration phenomenon observed in overparameterized feedforward linear networks.
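The whitening objective underlying this comparison can be sketched in a few lines. The following is a minimal illustration, not the abstract's circuit model: a symmetric recurrent weight matrix is adapted with the averaged anti-Hebbian update so that the steady-state outputs y = (I + W)^{-1} x become decorrelated with unit variance. All parameter values and the regularization of the input covariance are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
A = rng.normal(size=(d, d))
C = A @ A.T / d + 0.5 * np.eye(d)  # input covariance (regularized so it is well-conditioned)

def direct_recurrent_whitening(C, steps=500, eta=0.1):
    """Adapt symmetric recurrent weights W so that the steady-state outputs
    y = (I + W)^{-1} x are white, i.e. Cov[y] -> I. Uses the averaged
    (expected) anti-Hebbian update dW = eta * (Cov[y] - I)."""
    d = C.shape[0]
    W = np.zeros((d, d))
    for _ in range(steps):
        M = np.linalg.inv(np.eye(d) + W)   # input-to-output map at steady state
        cov_y = M @ C @ M.T                # output covariance under current weights
        W += eta * (cov_y - np.eye(d))
    return W

W = direct_recurrent_whitening(C)
M = np.linalg.inv(np.eye(d) + W)
cov_y = M @ C @ M.T
print(np.round(cov_y, 3))  # close to the identity matrix
```

The fixed point satisfies (I + W)(I + W)^T = C, i.e. W = C^{1/2} - I; the interneuron network of the abstract reaches the same objective through an overparameterized factorization.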
In the field of connectomics, a major problem is 3D neuron segmentation. Although deep-learning-based methods have achieved remarkable accuracy, errors remain, especially in regions with image defects. One common type of defect is consecutive missing image sections: data is lost along some axis, and the resulting neuron segmentations are split across the gap. To address this problem, we propose a novel method based on a point cloud representation of neurons. We formulate the task as a classification problem and train a state-of-the-art point cloud classification model to identify which neurons should be merged. We show that our method not only performs strongly, but also scales reasonably to problems beyond those other methods have attempted to solve. Additionally, our point cloud representation is data-efficient, maintaining high performance with amounts of data that would be infeasible for other methods. We believe this is an indicator of the viability of using point cloud representations for other proofreading tasks.
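The key property a point cloud classifier needs is permutation invariance: a neuron fragment is an unordered set of 3D points. A minimal sketch of that ingredient, using a toy stand-in for a trained model with random illustrative weights:

```python
import numpy as np

def point_cloud_embedding(points, W):
    """Permutation-invariant embedding of an (n, 3) point set:
    a shared per-point linear map + ReLU, followed by max-pooling
    over points (the symmetric function that grants set invariance)."""
    h = np.maximum(points @ W, 0.0)   # (n, k) per-point features
    return h.max(axis=0)              # (k,) order-independent summary

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 16))          # illustrative weights, not trained
fragment = rng.normal(size=(100, 3))  # one neuron fragment as a point cloud
emb = point_cloud_embedding(fragment, W)

# A merge decision for two fragments would then score their (concatenated) embeddings.
shuffled = fragment[rng.permutation(len(fragment))]
print(np.allclose(emb, point_cloud_embedding(shuffled, W)))  # True
```

Because the pooling is over points, the embedding is identical for any ordering of the fragment's points, which is what makes point clouds a natural representation for split-error proofreading.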
The brain effortlessly solves blind source separation (BSS) problems, but the algorithms it uses remain elusive. In signal processing, linear BSS problems are often solved by independent component analysis (ICA). To serve as a model of a biological circuit, an ICA neural network (NN) must satisfy at least the following requirements: 1. The algorithm must operate in the online setting, where data samples are streamed one at a time and the NN computes the sources on the fly, without storing any significant fraction of the data in memory. 2. The synaptic weight updates are local, i.e., they depend only on biophysical variables present in the vicinity of the synapse. Here, we propose a novel objective function for ICA from which we derive a biologically plausible NN, including both the neural architecture and the synaptic learning rules. Interestingly, our algorithm relies on modulating synaptic plasticity by the total activity of the output neurons. In the brain, this could be achieved by neuromodulators, extracellular calcium, local field potentials, or nitric oxide.
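The abstract's network and objective are not given here; as a point of reference, the classical online ICA update (the natural-gradient/infomax rule for super-Gaussian sources) illustrates the streaming setting of requirement 1: one sample at a time, with no memory of past data. The locality requirement 2 and the activity-dependent modulation are specific to the abstract's derivation and are not captured by this baseline; the mixing matrix and step size below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 2, 30000
S = rng.laplace(size=(T, n))             # two independent super-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # mixing matrix (illustrative)
X = S @ A.T                              # observed mixtures, streamed row by row

W = np.eye(n)                            # unmixing estimate
eta = 0.005
for x in X:                              # online: one sample at a time
    y = W @ x
    # natural-gradient infomax update for super-Gaussian sources
    W += eta * (np.eye(n) - np.outer(np.tanh(y), y)) @ W

P = W @ A   # approaches a scaled permutation matrix when separation succeeds
dominance = np.abs(P).max(axis=1) / np.abs(P).sum(axis=1)
print(np.round(dominance, 2))  # each row dominated by a single entry
```

At the fixed point, E[tanh(y) y^T] = I, so each output locks onto one source up to sign and scale, the usual ICA ambiguities.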
A major problem in motor control is understanding how the brain plans and executes appropriate movements in the face of delayed and noisy stimuli. A prominent framework for addressing such control problems is optimal feedback control (OFC). OFC generates control actions that optimize behaviorally relevant criteria by integrating noisy sensory stimuli with the predictions of an internal model using a Kalman filter or its extensions. However, a satisfactory neural model of Kalman filtering and control is lacking, because existing proposals have the following limitations: they do not account for delays in sensory feedback, they require training in alternating phases, and they need knowledge of the noise covariance matrices as well as of the system dynamics. Moreover, most of these studies consider Kalman filtering in isolation rather than jointly with control. To address these shortcomings, we introduce a novel online algorithm that combines adaptive Kalman filtering with a model-free control approach (i.e., a policy gradient algorithm). We implement this algorithm in a biologically plausible neural network with local synaptic plasticity rules. This network performs system identification and Kalman filtering without multiple phases, without different update rules, and without knowledge of the noise covariances. With the help of an internal model, it can perform state estimation using delayed sensory feedback. It learns the control policy without requiring any knowledge of the dynamics, thus avoiding the need for weight transport. In this way, our implementation of OFC solves the credit assignment problem needed to produce appropriate sensorimotor control in the presence of stimulus delay.
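The filtering half of the algorithm is standard; the following minimal scalar Kalman filter on a simulated linear system (illustrative parameters) shows the predict/update cycle that the abstract's network implements adaptively. The learning of the gains with local rules and the combination with policy gradients are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
a, q, r = 0.95, 0.1, 1.0   # dynamics coefficient, process noise var, observation noise var
T = 5000

# simulate the latent state and its noisy observations
x, xs, ys = 0.0, [], []
for _ in range(T):
    x = a * x + rng.normal(scale=np.sqrt(q))
    xs.append(x)
    ys.append(x + rng.normal(scale=np.sqrt(r)))

# scalar Kalman filter
xhat, P = 0.0, 1.0
errs_kf, errs_raw = [], []
for xt, yt in zip(xs, ys):
    xhat, P = a * xhat, a * a * P + q      # predict via the internal model
    K = P / (P + r)                         # Kalman gain
    xhat = xhat + K * (yt - xhat)           # update with the noisy observation
    P = (1 - K) * P
    errs_kf.append((xhat - xt) ** 2)
    errs_raw.append((yt - xt) ** 2)

print(np.mean(errs_kf), np.mean(errs_raw))  # filtered MSE well below raw-observation MSE
```

The filtered estimate beats the raw observation because the prediction step reuses the system dynamics; in the abstract's setting those dynamics are themselves identified online rather than assumed known.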
Three main points: 1. Data Science (DS) will be increasingly important to heliophysics; 2. Methods of heliophysics science discovery will continually evolve, requiring the use of learning technologies [e.g., machine learning (ML)] that are applied rigorously and that are capable of supporting discovery; and 3. To grow with the pace of data, technology, and workforce changes, heliophysics requires a new approach to the representation of knowledge.
Image classification with small datasets has been an active research area in the recent past. However, as research in this scope is still in its infancy, two key ingredients are missing for ensuring reliable and truthful progress: a systematic and extensive overview of the state of the art, and a common benchmark to allow for objective comparisons between published methods. This article addresses both issues. First, we systematically organize and connect past studies to consolidate a community that is currently fragmented and scattered. Second, we propose a common benchmark that allows for an objective comparison of approaches. It consists of five datasets spanning various domains (e.g., natural images, medical imagery, satellite data) and data types (RGB, grayscale, multispectral). We use this benchmark to re-evaluate the standard cross-entropy baseline and ten existing methods published between 2017 and 2021 at renowned venues. Surprisingly, we find that thorough hyper-parameter tuning on held-out validation data results in a highly competitive baseline and highlights a stunted growth of performance over the years. Indeed, only a single specialized method dating back to 2019 clearly wins our benchmark and outperforms the baseline classifier.
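The tuning protocol behind the strong baseline is simple to state: select hyper-parameters on held-out validation data, never on the test set. A minimal sketch of that loop with a toy ridge-regression model on synthetic data (the model, grid, and split sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
w_true = rng.normal(size=10)
y = X @ w_true + rng.normal(scale=0.5, size=300)

# fixed split: train / held-out validation / test
X_tr, y_tr = X[:150], y[:150]
X_val, y_val = X[150:225], y[150:225]
X_te, y_te = X[225:], y[225:]

def fit_ridge(X, y, alpha):
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# choose the regularization strength on the validation split only
grid = [0.01, 0.1, 1.0, 10.0, 100.0]
best_alpha = min(grid, key=lambda a: mse(fit_ridge(X_tr, y_tr, a), X_val, y_val))
test_mse = mse(fit_ridge(X_tr, y_tr, best_alpha), X_te, y_te)
print(best_alpha, round(test_mse, 3))
```

The test split is touched exactly once, after the hyper-parameter is fixed; tuning against the test set instead is what inflates many published small-data results.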
Dataset scaling, also known as normalization, is an essential preprocessing step in a machine learning pipeline. It is aimed at adjusting attribute scales so that they all vary within the same range. This transformation is known to improve the performance of classification models, but there are several scaling techniques to choose from, and this choice is generally not made carefully. In this paper, we execute a broad experiment comparing the impact of 5 scaling techniques on the performances of 20 classification algorithms, among monolithic and ensemble models, applying them to 82 publicly available datasets with varying imbalance ratios. Results show that the choice of scaling technique matters for classification performance, and the performance difference between the best and the worst scaling technique is relevant and statistically significant in most cases. They also indicate that choosing an inadequate technique can be more detrimental to classification performance than not scaling the data at all. We also show how the performance variation of an ensemble model, considering different scaling techniques, tends to be dictated by that of its base model. Finally, we discuss the relationship between a model's sensitivity to the choice of scaling technique and its performance, and provide insights into its applicability on different model deployment scenarios. Full results and source code for the experiments in this paper are available in a GitHub repository: https://github.com/amorimlb/scaling_matters
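For concreteness, three of the most common scaling techniques can be written directly; these numpy versions of min-max, z-score, and robust scaling are generic textbook definitions, not necessarily the exact five techniques the study compares.

```python
import numpy as np

def minmax_scale(X):
    """Rescale each attribute to the [0, 1] range."""
    mn, mx = X.min(axis=0), X.max(axis=0)
    return (X - mn) / (mx - mn)

def zscore_scale(X):
    """Center each attribute and rescale it to unit standard deviation."""
    return (X - X.mean(axis=0)) / X.std(axis=0)

def robust_scale(X):
    """Median/IQR scaling: less sensitive to outliers than z-score."""
    med = np.median(X, axis=0)
    q1, q3 = np.percentile(X, [25, 75], axis=0)
    return (X - med) / (q3 - q1)

rng = np.random.default_rng(0)
X = rng.normal(loc=[0, 100], scale=[1, 50], size=(200, 2))  # wildly different scales
Xs = zscore_scale(X)
print(Xs.mean(axis=0).round(6), Xs.std(axis=0).round(6))  # ~0 mean, unit std per column
```

The choice matters because the techniques react differently to outliers and skew, which is exactly the sensitivity the experiment quantifies across models and datasets.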
The devastation caused by the coronavirus pandemic makes it imperative to design automated techniques for a fast and accurate detection. We propose a novel non-invasive tool, using deep learning and imaging, for delineating COVID-19 infection in lungs. The Ensembling Attention-based Multi-scaled Convolution network (EAMC), employing Leave-One-Patient-Out (LOPO) training, exhibits high sensitivity and precision in outlining infected regions along with assessment of severity. The Attention module combines contextual with local information, at multiple scales, for accurate segmentation. Ensemble learning integrates heterogeneity of decision through different base classifiers. The superiority of EAMC, even with severe class imbalance, is established through comparison with existing state-of-the-art learning models over four publicly-available COVID-19 datasets. The results are suggestive of the relevance of deep learning in providing assistive intelligence to medical practitioners, when they are overburdened with patients as in pandemics. Its clinical significance lies in its unprecedented scope in providing low-cost decision-making for patients lacking specialized healthcare at remote locations.
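Leave-One-Patient-Out training keeps all scans from a given patient in the same fold, so the model is never evaluated on a patient it has seen. A minimal sketch of the split logic, with illustrative patient IDs:

```python
from collections import defaultdict

def lopo_splits(sample_patient_ids):
    """Yield (patient, train_indices, test_indices) triples, one per patient:
    the held-out fold contains every sample from exactly one patient."""
    by_patient = defaultdict(list)
    for i, pid in enumerate(sample_patient_ids):
        by_patient[pid].append(i)
    for pid, test_idx in by_patient.items():
        held_out = set(test_idx)
        train_idx = [i for i in range(len(sample_patient_ids)) if i not in held_out]
        yield pid, train_idx, test_idx

# e.g. seven scans from three patients
ids = ["p1", "p1", "p2", "p3", "p3", "p3", "p2"]
folds = list(lopo_splits(ids))
print([(pid, test) for pid, _, test in folds])
```

Splitting at the patient level rather than the scan level prevents the leakage that would otherwise inflate sensitivity and precision estimates.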
Objective: Imbalances of the electrolyte concentration levels in the body can lead to catastrophic consequences, but accurate and accessible measurements could improve patient outcomes. While blood tests provide accurate measurements, they are invasive and the laboratory analysis can be slow or inaccessible. In contrast, an electrocardiogram (ECG) is a widely adopted tool which is quick and simple to acquire. However, the problem of estimating continuous electrolyte concentrations directly from ECGs is not well-studied. We therefore investigate if regression methods can be used for accurate ECG-based prediction of electrolyte concentrations. Methods: We explore the use of deep neural networks (DNNs) for this task. We analyze the regression performance across four electrolytes, utilizing a novel dataset containing over 290000 ECGs. For improved understanding, we also study the full spectrum from continuous predictions to binary classification of extreme concentration levels. To enhance clinical usefulness, we finally extend to a probabilistic regression approach and evaluate different uncertainty estimates. Results: We find that the performance varies significantly between different electrolytes, which is clinically justified in the interplay of electrolytes and their manifestation in the ECG. We also compare the regression accuracy with that of traditional machine learning models, demonstrating superior performance of DNNs. Conclusion: Discretization can lead to good classification performance, but does not help solve the original problem of predicting continuous concentration levels. While probabilistic regression demonstrates potential practical usefulness, the uncertainty estimates are not particularly well-calibrated. Significance: Our study is a first step towards accurate and reliable ECG-based prediction of electrolyte concentration levels.
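The probabilistic extension and its calibration check can be illustrated with a Gaussian linear model: predict a mean by least squares, estimate a homoscedastic noise variance from training residuals, and measure how often the 95% predictive interval covers held-out targets. The data here are synthetic; the study uses DNNs on ECGs, where miscalibration is precisely what shows up.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 4000, 5
X = rng.normal(size=(n, d))
w = rng.normal(size=d)
y = X @ w + rng.normal(scale=0.7, size=n)

X_tr, y_tr, X_te, y_te = X[:2000], y[:2000], X[2000:], y[2000:]

# probabilistic regression: point prediction + predictive standard deviation
w_hat, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
sigma_hat = np.std(y_tr - X_tr @ w_hat)   # homoscedastic noise estimate

mu = X_te @ w_hat
# calibration: a well-calibrated 95% interval should cover ~95% of held-out targets
coverage = float(np.mean(np.abs(y_te - mu) < 1.96 * sigma_hat))
print(round(coverage, 3))
```

On this well-specified toy the empirical coverage lands near the nominal 95%; a systematic gap between the two is the miscalibration the conclusion refers to.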
Candidate axiom scoring is the task of assessing the acceptability of a candidate axiom against the evidence provided by known facts or data. The ability to score candidate axioms reliably is required for automated schema or ontology induction, but it can also be valuable for ontology and/or knowledge graph validation. Accurate axiom scoring heuristics are often computationally expensive, which is an issue if you wish to use them in iterative search techniques like level-wise generate-and-test or evolutionary algorithms, which require scoring a large number of candidate axioms. We address the problem of developing a predictive model as a substitute for reasoning that predicts the possibility score of candidate class axioms and is quick enough to be employed in such situations. We use a semantic similarity measure taken from an ontology's subsumption structure for this purpose. We show that the approach provided in this work can accurately learn the possibility scores of candidate OWL class axioms and that it can do so for a variety of OWL class axioms.
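A semantic similarity measure drawn from a subsumption hierarchy can be as simple as Wu-Palmer similarity: the depth of the least common subsumer of two classes, normalized by their own depths. This is a standard measure offered as an illustration; the abstract does not specify which measure the approach uses. The toy hierarchy below is invented.

```python
def wu_palmer(parent, a, b):
    """Wu-Palmer similarity over a class hierarchy given as a child->parent map.
    sim = 2 * depth(LCS) / (depth(a) + depth(b)), with the root at depth 1."""
    def ancestors(c):
        chain = [c]
        while c in parent:
            c = parent[c]
            chain.append(c)
        return chain  # node, ..., root

    pa, pb = ancestors(a), ancestors(b)
    depth = {c: len(pa) - i for i, c in enumerate(pa)}  # depths along a's chain
    lcs = next(c for c in pb if c in depth)             # deepest shared subsumer
    return 2 * depth[lcs] / (len(pa) + len(pb))

# toy subsumption structure: Thing > Animal > {Dog, Cat}; Thing > Plant
parent = {"Animal": "Thing", "Plant": "Thing", "Dog": "Animal", "Cat": "Animal"}
print(wu_palmer(parent, "Dog", "Cat"))    # siblings under Animal: high similarity
print(wu_palmer(parent, "Dog", "Plant"))  # only share the root: low similarity
```

Because the measure is computed purely from the subsumption structure, it is cheap enough to drive a predictive scoring model inside generate-and-test or evolutionary search, which is the motivation the abstract gives.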